On Convergence of Binary Trust-Region Steepest Descent

Authors

Abstract

Binary trust-region steepest descent (BTR) and combinatorial integral approximation (CIA) are two recently investigated approaches for the solution of optimization problems with distributed binary-/discrete-valued variables (control functions). We show improved convergence results for BTR by imposing a compactness assumption that is similar to the one used in the theory of CIA. As a corollary, we conclude that BTR also constitutes a descent algorithm on the continuous relaxation and that its iterates converge weakly-$^*$ to stationary points of the latter. We provide computational results that validate our findings. In addition, we observe a regularizing effect of BTR, which we explore by means of a hybridization of CIA and BTR.
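The general idea behind a binary trust-region descent — flip the binary entries that a first-order model predicts will decrease the objective, with the number of flips bounded by a trust radius that shrinks on rejected steps — can be sketched roughly as follows. This is an illustrative toy under assumed details (the greedy flip selection, the toy objective, and all parameters are my choices), not the paper's algorithm:

```python
import numpy as np

def btr_sketch(f, grad, x0, delta0=4, delta_min=1, max_iter=100):
    """Rough sketch: minimize f over {0,1}^n by flipping at most `delta`
    coordinates per iteration, chosen greedily by the predicted decrease
    of a first-order model; shrink the trust radius on rejected steps."""
    x = x0.copy()
    delta = delta0
    for _ in range(max_iter):
        g = grad(x)
        # First-order model change of flipping coordinate i:
        # -g_i if x_i = 1 -> 0, +g_i if x_i = 0 -> 1.
        pred = np.where(x == 1, -g, g)
        order = np.argsort(pred)                 # most negative first
        cand = [i for i in order[:delta] if pred[i] < 0]
        if not cand:
            break                                # no predicted descent
        trial = x.copy()
        trial[cand] = 1 - trial[cand]
        if f(trial) < f(x):
            x = trial                            # accept the step
        else:
            if delta <= delta_min:
                break
            delta = max(delta_min, delta // 2)   # shrink trust radius
    return x

# Toy example: f(x) = ||x - t||^2 with a binary target t
t = np.array([1, 0, 1, 1, 0, 0, 1, 0])
f = lambda x: float(np.sum((x - t) ** 2))
grad = lambda x: 2.0 * (x - t)
x_opt = btr_sketch(f, grad, np.zeros(8, dtype=int))
```

On this separable toy the greedy flips recover the target in a couple of iterations; the paper's setting (distributed control functions) is of course far more general.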

Similar Articles

R-Linear Convergence of Limited Memory Steepest Descent

The limited memory steepest descent method (LMSD) proposed by Fletcher is an extension of the Barzilai-Borwein “two-point step size” strategy for steepest descent methods for solving unconstrained optimization problems. It is known that the Barzilai-Borwein strategy yields a method with an R-linear rate of convergence when it is employed to minimize a strongly convex quadratic. This paper exten...
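The Barzilai-Borwein "two-point" strategy mentioned above picks the step size from the two most recent iterates and gradients. A minimal sketch on a strongly convex quadratic (the test matrix, tolerance, and safeguarded first step are assumptions for illustration):

```python
import numpy as np

def bb_gradient_descent(A, b, x0, tol=1e-8, max_iter=200):
    """Gradient descent on f(x) = 0.5 x^T A x - b^T x with the
    Barzilai-Borwein step size alpha_k = (s^T s) / (s^T y), where
    s = x_k - x_{k-1} and y = g_k - g_{k-1}."""
    x = x0.astype(float)
    g = A @ x - b
    alpha = 1.0 / np.linalg.norm(A, 2)   # conservative first step
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - alpha * g
        g_new = A @ x_new - b
        s, y = x_new - x, g_new - g
        alpha = (s @ s) / (s @ y)        # BB1 ("two-point") step size
        x, g = x_new, g_new
    return x

# Strongly convex quadratic; the minimizer is A^{-1} b
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x = bb_gradient_descent(A, b, np.zeros(2))
```

For symmetric positive definite A the curvature term s^T y = s^T A s stays positive, so the step size is well defined until convergence; the iteration is typically nonmonotone but R-linearly convergent, which is the property the paper extends.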

Global Convergence of Steepest Descent for Quadratic Functions

This paper analyzes the effect of momentum on steepest descent training for quadratic performance functions. Some global convergence conditions for the steepest descent algorithm are obtained by directly analyzing the exact momentum equations for quadratic cost functions. These conditions can be derived directly from the algorithm parameters (rather than from the eigenvalues used in existing conditions) ...
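The momentum recursion analyzed above is the classical heavy-ball update x_{k+1} = x_k - lr * grad f(x_k) + beta * (x_k - x_{k-1}). A minimal sketch on a diagonal quadratic, using the textbook tuning for eigenvalues in [m, L] (the matrix and tuning are my illustrative choices, not the paper's conditions):

```python
import numpy as np

def heavy_ball(A, b, x0, lr, beta, max_iter=500):
    """Steepest descent with momentum on f(x) = 0.5 x^T A x - b^T x:
    x_{k+1} = x_k - lr * (A x_k - b) + beta * (x_k - x_{k-1})."""
    x_prev = x0.astype(float)
    x = x_prev.copy()
    for _ in range(max_iter):
        g = A @ x - b
        # simultaneous update of (x, x_prev)
        x, x_prev = x - lr * g + beta * (x - x_prev), x
    return x

# Classical tuning for eigenvalues in [m, L]:
# lr = 4/(sqrt(L)+sqrt(m))^2, beta = ((sqrt(L)-sqrt(m))/(sqrt(L)+sqrt(m)))^2
A = np.diag([1.0, 10.0])
m, L = 1.0, 10.0
lr = 4.0 / (np.sqrt(L) + np.sqrt(m)) ** 2
beta = ((np.sqrt(L) - np.sqrt(m)) / (np.sqrt(L) + np.sqrt(m))) ** 2
b = np.array([1.0, 1.0])
x = heavy_ball(A, b, np.zeros(2), lr, beta)
```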

Steepest Descent

The steepest descent method has a rich history and is one of the simplest and best known methods for minimizing a function. While the method is not commonly used in practice due to its slow convergence rate, understanding the convergence properties of this method can lead to a better understanding of many of the more sophisticated optimization methods. Here, we give a short introduction and dis...
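For a quadratic, the method admits an exact line search in closed form, which makes its (slow, condition-number-dependent) linear convergence easy to observe. A minimal sketch, with an arbitrarily chosen test problem:

```python
import numpy as np

def steepest_descent(A, b, x0, tol=1e-10, max_iter=1000):
    """Classical steepest descent on f(x) = 0.5 x^T A x - b^T x with the
    exact line search alpha = (g^T g) / (g^T A g). The convergence rate
    degrades as the condition number of A grows."""
    x = x0.astype(float)
    for _ in range(max_iter):
        g = A @ x - b                     # gradient (also the residual)
        if np.linalg.norm(g) < tol:
            break
        alpha = (g @ g) / (g @ (A @ g))   # exact minimizer along -g
        x = x - alpha * g
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = steepest_descent(A, b, np.zeros(2))
```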

A Geometric Convergence Theory for the Preconditioned Steepest Descent Iteration

Preconditioned gradient iterations for very large eigenvalue problems are efficient solvers with growing popularity. However, only for the simplest preconditioned eigensolver, namely the preconditioned gradient iteration (or preconditioned inverse iteration) with fixed step size, sharp non-asymptotic convergence estimates are known. These estimates require a properly scaled preconditioner. In t...

Steepest descent on factor graphs

∑_x f(x, θ) log f(x, θ) exists for all θ. In principle, one can apply the sum-product algorithm in order to find (1), which involves the following two steps [2]: 1. Determine f(θ) by sum-product message passing. 2. Maximization step: compute θ_max ≜ argmax_θ f(θ). This procedure is often not feasible, since • when the variable x is continuous, the sum-product rule may lead to intractable in...


Journal

Journal title: Journal of Nonsmooth Analysis and Optimization

Year: 2023

ISSN: 2700-7448

DOI: https://doi.org/10.46298/jnsao-2023-10164